Speech Recognition for the iCub Platform

Authors

  • Bertrand Higy
  • Alessio Mereta
  • Giorgio Metta
  • Leonardo Badino
Abstract

This paper describes open source software (available at https://github.com/robotology/natural-speech) to build automatic speech recognition (ASR) systems and run them within the YARP platform. The toolkit is designed (i) to allow non-ASR experts to easily create their own ASR system and run it on iCub, and (ii) to build deep learning-based models specifically addressing the main challenges an ASR system faces in the context of verbal human–iCub interactions. The toolkit mostly consists of Python and C++ code and shell scripts integrated into YARP. As an additional contribution, a second codebase (written in Matlab) is provided for more expert ASR users who want to experiment with bio-inspired and developmental learning-inspired ASR systems. Specifically, we provide code for two distinct kinds of speech recognition: "articulatory" and "unsupervised" speech recognition. The first is largely inspired by influential neurobiological theories of speech perception which assume speech perception to be mediated by brain motor cortex activities. Our articulatory systems have been shown to outperform strong deep learning-based baselines. The second type of recognition system, the "unsupervised" systems, do not use any supervised information (contrary to most ASR systems, including our articulatory systems). To some extent, they mimic an infant who has to discover the basic speech units of a language by herself. In addition, we provide resources consisting of pre-trained deep learning models for ASR, and a 2.5-h speech dataset of spoken commands, the VoCub dataset, which can be used to adapt an ASR system to the typical acoustic environments in which iCub operates.
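
Since the abstract describes the YARP integration only at a high level, the following is a minimal sketch (not taken from the natural-speech repository) of how a client might read recognition results from such an ASR module over YARP using the standard YARP Python bindings. The port names /asr/text:o and /reader/text:i are assumptions made purely for illustration.

    # Minimal sketch, assuming a hypothetical ASR module that publishes
    # recognized text on a YARP port named "/asr/text:o".
    import time
    import yarp

    yarp.Network.init()

    # Local input port that will receive recognition results as text Bottles
    text_port = yarp.BufferedPortBottle()
    text_port.open("/reader/text:i")

    # Connect the (assumed) ASR output port to our local port
    yarp.Network.connect("/asr/text:o", "/reader/text:i")

    try:
        while True:
            bottle = text_port.read(False)  # non-blocking read
            if bottle is not None:
                print("Recognized:", bottle.toString())
            time.sleep(0.1)
    finally:
        text_port.close()
        yarp.Network.fini()

The same pattern applies to any YARP module that streams text: the client only needs to know the module's output port name and connect it to a locally opened buffered port.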


Similar Resources

A Database for Automatic Persian Speech Emotion Recognition: Collection, Processing and Evaluation

Recent developments in robotics automation have motivated researchers to improve the efficiency of interactive systems by making a natural man-machine interaction. Since speech is the most popular method of communication, recognizing human emotions from speech signal becomes a challenging research topic known as Speech Emotion Recognition (SER). In this study, we propose a Persian em...


Real-Time 3D Stereo Tracking and Localizing of Spherical Objects with the iCub Robotic Platform

Visual pattern recognition is a basic capability of many species in nature. The skill of visually recognizing and distinguishing different objects in the surrounding environment gives rise to the development of sensory-motor maps in the brain, with the consequent capability of object reaching and manipulation. This paper presents the implementation of a real-time tracking algorithm for followin...


On the Robustness of Speech Emotion Recognition for Human-Robot Interaction with Deep Neural Networks

Speech emotion recognition (SER) is an important aspect of effective human-robot collaboration and has received a lot of attention from the research community. For example, many neural network-based architectures were proposed recently and pushed the performance to a new level. However, the applicability of such neural SER models trained only on in-domain data to noisy conditions is currently under-...


Teaching iCub to recognize objects using deep Convolutional Neural Networks

Providing robots with accurate and robust visual recognition capabilities in the real world today is a challenge which prevents the use of autonomous agents for concrete applications. Indeed, the majority of tasks, such as manipulation and interaction with other agents, critically depend on the ability to visually recognize the entities involved in a scene. At the same time, computer vision systems...


Real-world Object Recognition with Off-the-shelf Deep Conv Nets: How Many Objects can iCub Learn?

The ability to visually recognize objects is a fundamental skill for robotics systems. Indeed, a large variety of tasks involving manipulation, navigation or interaction with other agents deeply depends on an accurate understanding of the visual scene. Yet, for the time being, robots lack good visual perceptual systems, which often become the main bottleneck preventing the use of autono...




Journal:
  • Front. Robotics and AI

Volume: 2018  Issue: –

Pages: –

Publication year: 2018